# Natural Language Inference

## Switch Base 8 Mnli

A text classification model based on google/switch-base-8 and fine-tuned on the GLUE MNLI dataset, used primarily for natural language inference.

**Author:** glamprou · **License:** Apache-2.0 · **Tags:** Text Classification, Transformers, English · **Downloads:** 17 · **Likes:** 0

## Nli Entailment Verifier Xxl

An NLI model fine-tuned from flan-t5-xxl that verifies whether a premise supports a hypothesis, optimized in particular for multi-sentence premises.

**Author:** soumyasanyal · **Tags:** Large Language Model, Transformers, English · **Downloads:** 164 · **Likes:** 5

## Camembert Base Xnli

A camembert-base model fine-tuned on the French portion of the XNLI dataset, supporting zero-shot classification in French.

**Author:** mtheo · **License:** MIT · **Tags:** Text Classification, Transformers, Multilingual · **Downloads:** 72 · **Likes:** 5

## Roberta Base Mnli Uf Ner 1024 Train V0

A fine-tuned version of the RoBERTa-base model on the MNLI dataset, suitable for natural language inference tasks.

**Author:** mariolinml · **License:** MIT · **Tags:** Large Language Model, Transformers · **Downloads:** 26 · **Likes:** 1

## Mt0 Xxl

mt0-xxl is a multilingual large language model from BigScience's BLOOMZ release, fine-tuned on the xP3 dataset. It supports cross-lingual tasks in 46 languages and excels at zero-shot cross-lingual instruction following.

**Author:** bigscience · **License:** Apache-2.0 · **Tags:** Large Language Model, Transformers, Multilingual · **Downloads:** 1,914 · **Likes:** 60

## Bert Tiny Finetuned Glue Rte

A text classification model based on the BERT-tiny architecture, fine-tuned on the GLUE RTE task and used primarily for recognizing textual entailment.

**Author:** muhtasham · **License:** Apache-2.0 · **Tags:** Text Classification, Transformers · **Downloads:** 37 · **Likes:** 1

## Roberta Large Wanli

A roberta-large model fine-tuned on the WANLI dataset for natural language inference tasks, outperforming roberta-large-mnli on multiple out-of-domain test sets.

**Author:** alisawuffles · **Tags:** Text Classification, Transformers, English · **Downloads:** 1,195 · **Likes:** 9

## Bertin Roberta Base Finetuning Esnli

A Spanish sentence embedding model based on BERTIN RoBERTa, optimized for natural language inference tasks.

**Author:** somosnlp-hackathon-2022 · **Tags:** Text Embedding, Spanish · **Downloads:** 103 · **Likes:** 7

## Nli Deberta V3 Large

A cross-encoder model based on the microsoft/deberta-v3-large architecture, trained for natural language inference on the SNLI and MultiNLI datasets.

**Author:** navteca · **License:** Apache-2.0 · **Tags:** Text Classification, Transformers, English · **Downloads:** 24 · **Likes:** 3

## Deberta Xlarge Mnli

DeBERTa enhances BERT with a disentangled attention mechanism; this 750M-parameter XLarge variant is fine-tuned on the MNLI task and excels at natural language understanding.

**Author:** microsoft · **License:** MIT · **Tags:** Large Language Model, Transformers, English · **Downloads:** 833.58k · **Likes:** 19

## Nli Distilroberta Base

A cross-encoder based on DistilRoBERTa for natural language inference tasks, capable of determining the relationship between sentence pairs (contradiction, entailment, or neutral).

**Author:** cross-encoder · **License:** Apache-2.0 · **Tags:** Text Classification, English · **Downloads:** 26.81k · **Likes:** 24

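Cross-encoder NLI models such as the one above take a (premise, hypothesis) pair and emit three logits, one per relationship class. The post-processing is a plain softmax plus argmax, sketched below in pure Python. The label order is an assumption for illustration; the authoritative mapping lives in each checkpoint's `id2label` config.

```python
import math

# Label order assumed for illustration; verify against the model's
# config.json id2label mapping before relying on it.
LABELS = ["contradiction", "entailment", "neutral"]

def softmax(logits):
    """Convert raw logits to a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits):
    """Map a 3-way logit vector to an NLI label and its probability."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[best], probs[best]

# Illustrative (made-up) logits for a pair like
# ("A man is eating pizza", "A man eats food"):
label, prob = classify([-2.1, 4.0, -1.3])
print(label, round(prob, 3))
```

With a real checkpoint the logits would come from the model's forward pass on the tokenized pair; only the mapping step changes nothing.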
## Deberta V3 Large Finetuned Mnli

A DeBERTa-v3-large model fine-tuned on the GLUE MNLI dataset for natural language inference, achieving 90% accuracy on the validation set.

**Author:** mrm8488 · **License:** MIT · **Tags:** Text Classification, Transformers, English · **Downloads:** 31 · **Likes:** 2

## Roberta Large Mnli

A roberta-large model trained on the MNLI dataset for natural language inference tasks.

**Author:** prajjwal1 · **Tags:** Large Language Model, Transformers · **Downloads:** 31 · **Likes:** 0

## Bert Base Cased Finetuned Mnli

A text classification model based on bert-base-cased and fine-tuned on the GLUE MNLI dataset, designed for natural language inference tasks.

**Author:** gchhablani · **License:** Apache-2.0 · **Tags:** Text Classification, Transformers, English · **Downloads:** 84 · **Likes:** 2

## Ruperta Base Finetuned Pawsx Es

A RuPERTa-base model fine-tuned on the Spanish PAWS-X dataset for natural language inference, designed to identify paraphrase relationships between sentence pairs.

**Author:** mrm8488 · **Tags:** Text Classification, Multilingual · **Downloads:** 14 · **Likes:** 0

## Bert Base Uncased Finetuned Rte

A text classification model based on the BERT base model and fine-tuned on the GLUE RTE task.

**Author:** anirudh21 · **License:** Apache-2.0 · **Tags:** Text Classification, Transformers · **Downloads:** 86 · **Likes:** 0

## Bert Medium Mnli

A PyTorch model converted from the TensorFlow checkpoints in the official Google BERT repository and trained on the MNLI dataset for natural language inference tasks.

**Author:** prajjwal1 · **Tags:** Large Language Model · **Downloads:** 415 · **Likes:** 1

## Qnli Electra Base

A cross-encoder model based on the ELECTRA architecture, designed for natural language inference in question answering: it determines whether a given question can be answered by a specific paragraph.

**Author:** cross-encoder · **License:** Apache-2.0 · **Tags:** Question Answering System, Transformers, English · **Downloads:** 6,172 · **Likes:** 3

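QNLI-style models reduce answerability to a binary decision on a (question, paragraph) pair. A minimal sketch of the decision step, assuming a single-logit output squashed through a sigmoid and a 0.5 cutoff (both assumptions for illustration; the real checkpoint's head and recommended threshold may differ):

```python
import math

def answerable(logit, threshold=0.5):
    """QNLI-style decision: does the paragraph answer the question?

    `logit` stands in for a model's raw single-logit output; the
    sigmoid + 0.5 threshold is an illustrative assumption.
    """
    prob = 1.0 / (1.0 + math.exp(-logit))
    return prob >= threshold, prob

# Example with a made-up logit value:
can_answer, prob = answerable(2.2)
```

In practice the threshold can be tuned on a validation set to trade precision against recall.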
## Squeezebert Mnli

SqueezeBERT is a lightweight version of BERT, optimized to reduce computational resource requirements while maintaining high performance in natural language understanding.

**Author:** typeform · **Tags:** Large Language Model, Transformers, English · **Downloads:** 37 · **Likes:** 4

## Albert Base V2 Mnli

This model investigates generalization capabilities in natural language inference (NLI) tasks, exploring how to move beyond simple heuristic methods.

**Author:** prajjwal1 · **Tags:** Large Language Model, Transformers · **Downloads:** 29 · **Likes:** 0

## Bart Large Mnli Bewgle

BART-large-MNLI is a sequence classification model trained by Facebook, based on the BART architecture and designed for natural language inference tasks.

**Author:** bewgle · **Tags:** Large Language Model, Transformers · **Downloads:** 23 · **Likes:** 0

## Albert Base V1 Mnli

This model focuses on the natural language inference (NLI) task, aiming to surpass simple heuristic methods and improve generalization in complex reasoning scenarios.

**Author:** prajjwal1 · **Tags:** Large Language Model, Transformers · **Downloads:** 23 · **Likes:** 0

## Deberta Base Mnli

DeBERTa (decoding-enhanced BERT with disentangled attention) fine-tuned on the MNLI task.

**Author:** microsoft · **License:** MIT · **Tags:** Large Language Model, English · **Downloads:** 96.92k · **Likes:** 6

## Bert Small Mnli

A PyTorch model converted from the TensorFlow checkpoints in the official Google BERT repository, originating from the paper 'Well-Read Students Learn Better: On the Importance of Pre-training Compact Models' and trained on the MNLI dataset.

**Author:** prajjwal1 · **Tags:** Large Language Model · **Downloads:** 29 · **Likes:** 0

## Distilbert Base Uncased Finetuned Mnli

A text classification model based on distilbert-base-uncased and fine-tuned on the GLUE MNLI task, used primarily for natural language inference.

**Author:** blizrys · **License:** Apache-2.0 · **Tags:** Text Classification, Transformers · **Downloads:** 23 · **Likes:** 0

## Albert Base V2 Finetuned Rte

A text classification model based on ALBERT base v2 (albert-base-v2) and fine-tuned on the GLUE RTE task, used primarily for recognizing textual entailment.

**Author:** anirudh21 · **License:** Apache-2.0 · **Tags:** Text Classification, Transformers · **Downloads:** 15 · **Likes:** 0

## Rubert Tiny Bilingual Nli

A Russian natural language inference model fine-tuned from rubert-tiny for predicting logical relationships between texts.

**Author:** cointegrated · **Tags:** Text Classification, Transformers, Other · **Downloads:** 122 · **Likes:** 8

## Albert Xlarge V2 Finetuned Wnli

A text classification model based on ALBERT-xlarge-v2 and fine-tuned on the GLUE WNLI task.

**Author:** anirudh21 · **License:** Apache-2.0 · **Tags:** Text Classification, Transformers · **Downloads:** 31 · **Likes:** 0

## Albert Base V2 Mnli

A text classification model based on ALBERT-base-v2 and fine-tuned on the GLUE MNLI dataset, designed for natural language inference tasks.

**Author:** Alireza1044 · **License:** Apache-2.0 · **Tags:** Text Classification, Transformers, English · **Downloads:** 18 · **Likes:** 0

## Zeroshot Selectra Small

A zero-shot classifier based on SELECTRA, fine-tuned on the Spanish portion of the XNLI dataset and suitable for zero-shot classification tasks.

**Author:** Recognai · **License:** Apache-2.0 · **Tags:** Text Classification, Transformers, Multilingual · **Downloads:** 599 · **Likes:** 5

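Zero-shot classifiers built on NLI, like the one above, work by turning each candidate label into a hypothesis and ranking labels by the model's entailment score for (text, hypothesis). The sketch below shows only that protocol: `toy_scorer` is a deliberately trivial stand-in for a real NLI model, and the hypothesis template is an assumption (each model card specifies its own, e.g. a Spanish template for a Spanish model).

```python
# Zero-shot classification via NLI: each candidate label becomes a
# hypothesis, and the label whose hypothesis the model most strongly
# "entails" wins.

def zero_shot(text, labels, entailment_score, template="This example is {}"):
    """Rank candidate labels by the entailment score of their hypotheses."""
    scores = {lbl: entailment_score(text, template.format(lbl)) for lbl in labels}
    return max(scores, key=scores.get), scores

def toy_scorer(premise, hypothesis):
    """Trivial stand-in for a real NLI model: fraction of hypothesis
    words that also appear in the premise."""
    p = set(premise.lower().split())
    h = set(hypothesis.lower().split())
    return len(p & h) / max(len(h), 1)

best, scores = zero_shot("this sports match went to penalties",
                         ["sports", "cooking"], toy_scorer)
```

Swapping `toy_scorer` for a function that runs a real NLI checkpoint and returns its entailment probability recovers the standard zero-shot pipeline.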
## Bert Base Uncased Mnli Sparse 70 Unstructured No Classifier

Fine-tuned from bert-base-uncased-sparse-70-unstructured on the MNLI task (GLUE benchmark), with the classifier layer removed so the model can be loaded more easily for training on other downstream tasks.

**Author:** Intel · **Tags:** Large Language Model, Transformers, English · **Downloads:** 17 · **Likes:** 0
